This paper introduces Honor of Kings Arena, a reinforcement learning (RL) environment based on Honor of Kings, one of the world's most popular games. Compared with the environments studied in most previous work, ours poses new generalization challenges for competitive reinforcement learning. It is a multi-agent problem in which one agent competes against an opponent, and it demands generalization ability because the agent must control diverse target heroes and face diverse opponents. We describe the observation, action, and reward specifications of the Honor of Kings domain and provide an open-source Python-based interface for communicating with the game engine. We provide twenty target heroes with a variety of tasks in Honor of Kings Arena and report initial baseline results for RL-based methods under feasible computing resources. Finally, we showcase the generalization challenges posed by Honor of Kings Arena and possible remedies for them. All of the software, including the environment class, is publicly available at https://github.com/tencent-ailab/hok_env. The documentation is available at https://aiarena.tencent.com/hok/doc/.
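For readers unfamiliar with driving such game-engine environments from Python, the snippet below is a minimal, Gym-style interaction loop. The `HoKEnv` class, its observation size, and its action space are placeholders standing in for the real hok_env interface, which may differ in detail.

```python
# Minimal Gym-style interaction loop. `HoKEnv` is a hypothetical stand-in for
# the actual hok_env interface; shapes and the action space are illustrative.
import numpy as np

class HoKEnv:
    """Placeholder environment exposing the usual reset/step contract."""
    def reset(self, hero_id: int):
        return np.zeros(128, dtype=np.float32)        # observation vector
    def step(self, action: int):
        obs = np.zeros(128, dtype=np.float32)
        reward, done, info = 0.0, True, {}
        return obs, reward, done, info

env = HoKEnv()
obs = env.reset(hero_id=0)                            # pick one of the target heroes
done = False
while not done:
    action = np.random.randint(0, 12)                 # random policy as a stand-in
    obs, reward, done, info = env.step(action)
```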
Graph convolutional neural networks (GCNs) have attracted increasing attention and achieved good performance on various computer vision tasks; however, a clear explanation of the inner mechanism of GCNs is still lacking. For standard convolutional neural networks (CNNs), class activation mapping (CAM) methods are commonly used to visualize the connection between a CNN's decision and the relevant image regions by generating a heatmap. Nonetheless, when such CAMs are applied directly to GCNs, the resulting heatmaps are usually semantically chaotic. In this paper, we propose a novel visualization method specifically suited to GCNs, Vertex Semantic Class Activation Mapping (VS-CAM). VS-CAM consists of two independent pipelines that produce a set of semantic-probe maps and a semantic-base map, respectively. The semantic-probe maps are used to detect semantic information in the semantic-base map and aggregate it into a semantic-aware heatmap. Qualitative results show that VS-CAM obtains heatmaps whose highlighted regions match the objects much more precisely than CNN-based CAMs, and quantitative evaluation further demonstrates the superiority of VS-CAM.
We present X-Decoder, a generalized decoding model that can predict pixel-level segmentation and language tokens seamlessly. X-Decoder takes as input two types of queries: (i) generic non-semantic queries and (ii) semantic queries induced from text inputs, to decode different pixel-level and token-level outputs in the same semantic space. With such a novel design, X-Decoder is the first work that provides a unified way to support all types of image segmentation and a variety of vision-language (VL) tasks. Further, our design enables seamless interactions across tasks at different granularities and brings mutual benefits by learning a common and rich pixel-level visual-semantic understanding space, without any pseudo-labeling. After pretraining on a mixed set of a limited amount of segmentation data and millions of image-text pairs, X-Decoder exhibits strong transferability to a wide range of downstream tasks in both zero-shot and finetuning settings. Notably, it achieves (1) state-of-the-art results on open-vocabulary segmentation and referring segmentation on eight datasets; (2) better or competitive finetuned performance to other generalist and specialist models on segmentation and VL tasks; and (3) flexibility for efficient finetuning and novel task composition (e.g., referring captioning and image editing). Code, demo, video, and visualization are available at https://x-decoder-vl.github.io.
Exploring dense matching between the current frame and past frames for long-range context modeling, memory-based methods have demonstrated impressive results in video object segmentation (VOS) recently. Nevertheless, due to the lack of instance understanding ability, the above approaches are oftentimes brittle to large appearance variations or viewpoint changes resulting from the movement of objects and cameras. In this paper, we argue that instance understanding matters in VOS, and integrating it with memory-based matching yields a natural synergy, which is intuitively sensible from the definition of the VOS task, i.e., identifying and segmenting object instances within the video. Towards this goal, we present a two-branch network for VOS, where the query-based instance segmentation (IS) branch delves into the instance details of the current frame and the VOS branch performs spatial-temporal matching with the memory bank. We employ the well-learned object queries from the IS branch to inject instance-specific information into the query key, with which the instance-augmented matching is further performed. In addition, we introduce a multi-path fusion block to effectively combine the memory readout with multi-scale features from the instance segmentation decoder, which incorporates high-resolution instance-aware features to produce final segmentation results. Our method achieves state-of-the-art performance on DAVIS 2016/2017 val (92.6% and 87.1%), DAVIS 2017 test-dev (82.8%), and YouTube-VOS 2018/2019 val (86.3% and 86.3%), outperforming alternative methods by clear margins.
Benefiting from masked visual modeling, self-supervised video representation learning has achieved remarkable progress. However, existing methods focus on learning representations from scratch through reconstructing low-level features like raw pixel RGB values. In this paper, we propose masked video distillation (MVD), a simple yet effective two-stage masked feature modeling framework for video representation learning: firstly we pretrain an image (or video) model by recovering low-level features of masked patches, then we use the resulting features as targets for masked feature modeling. For the choice of teacher models, we observe that students taught by video teachers perform better on temporally-heavy video tasks, while image teachers transfer stronger spatial representations for spatially-heavy video tasks. Visualization analysis also indicates different teachers produce different learned patterns for students. Motivated by this observation, to leverage the advantage of different teachers, we design a spatial-temporal co-teaching method for MVD. Specifically, we distill student models from both video teachers and image teachers by masked feature modeling. Extensive experimental results demonstrate that video transformers pretrained with spatial-temporal co-teaching outperform models distilled with a single teacher on a multitude of video datasets. Our MVD with vanilla ViT achieves state-of-the-art performance compared with previous supervised or self-supervised methods on several challenging video downstream tasks. For example, with the ViT-Large model, our MVD achieves 86.4% and 75.9% Top-1 accuracy on Kinetics-400 and Something-Something-v2, outperforming VideoMAE by 1.2% and 1.6% respectively. Code will be available at \url{https://github.com/ruiwang2021/mvd}.
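As a rough illustration of the masked feature modeling objective described above, the PyTorch sketch below distills a student from a frozen teacher by regressing the teacher's token features at masked positions; the tensor shapes and the MSE criterion are assumptions for illustration, not the paper's exact recipe.

```python
# Masked-feature distillation objective, sketched in PyTorch. The student
# reconstructs the frozen teacher's token features at masked positions.
import torch
import torch.nn.functional as F

def masked_feature_distillation_loss(student_pred, teacher_feat, mask):
    """
    student_pred: (B, N, C) student predictions for all tokens
    teacher_feat: (B, N, C) features from a frozen image or video teacher
    mask:         (B, N) boolean, True where the token was masked
    """
    teacher_feat = teacher_feat.detach()                 # teacher is not updated
    return F.mse_loss(student_pred[mask], teacher_feat[mask])

# Spatial-temporal co-teaching would combine two such losses, e.g.
#   total = loss(pred_img, image_teacher_feat, mask) + loss(pred_vid, video_teacher_feat, mask)
```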
Transformer-based models have achieved top performance on major video recognition benchmarks. Benefiting from the self-attention mechanism, these models show a stronger ability to model long-range dependencies than CNN-based models. However, the significant computational overhead caused by the quadratic complexity of self-attention over a huge number of tokens limits the use of existing video transformers in applications with restricted resources such as mobile devices. In this paper, we extend Mobile-Former to Video Mobile-Former, which decouples the video architecture into a lightweight 3D-CNN for local context modeling and a Transformer module for global interaction modeling in a parallel fashion. To avoid the significant computational cost of computing self-attention among the large number of local patches in a video, we propose to use very few global tokens (e.g., 6) for the whole video, which exchange information with the 3D-CNN through a cross-attention mechanism. Through efficient global spatio-temporal modeling, Video Mobile-Former significantly improves the video recognition performance of alternative lightweight baselines and outperforms other efficient CNN-based models in the low-FLOP regime from 500M to 6G total FLOPs on various video recognition tasks. Notably, Video Mobile-Former is the first Transformer-based video model to constrain its computational budget within 1G FLOPs.
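The core efficiency idea, a handful of global tokens exchanging information with 3D-CNN features through cross-attention, can be sketched as follows; the dimensions and the use of `nn.MultiheadAttention` are illustrative choices rather than the paper's exact module.

```python
# Cross-attention between a few learnable global tokens and flattened 3D-CNN
# features: the cost is linear in the number of local patches.
import torch
import torch.nn as nn

class GlobalTokenCrossAttention(nn.Module):
    def __init__(self, dim=128, num_tokens=6, num_heads=4):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(1, num_tokens, dim))  # e.g., 6 global tokens
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, cnn_feats):
        # cnn_feats: (B, C, T, H, W) features from the lightweight 3D-CNN branch
        B, C, T, H, W = cnn_feats.shape
        kv = cnn_feats.flatten(2).transpose(1, 2)      # (B, T*H*W, C) local patches
        q = self.tokens.expand(B, -1, -1)              # (B, num_tokens, C)
        updated_tokens, _ = self.attn(q, kv, kv)       # tokens attend to local features
        return updated_tokens

tokens = GlobalTokenCrossAttention()(torch.randn(2, 128, 4, 7, 7))
print(tokens.shape)   # torch.Size([2, 6, 128])
```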
The complexity-precision trade-off of an object detector is a critical problem for resource-constrained vision tasks. Previous works have emphasized detectors built on efficient backbones. In this work, the impact on this trade-off of the proposal processing performed by the detection head is investigated. We hypothesize that improved detection efficiency requires a paradigm shift toward unequal processing of proposals, assigning more computation to good proposals than to poor ones. This makes better use of the available computational budget and enables higher accuracy at the same FLOPs. We formulate this as a learning problem whose goal is to assign operators to proposals in the detection head so that the total computational cost is constrained and precision is maximized. The key finding is that this matching can be learned as a function that maps each proposal embedding to a one-hot code over operators. Although this function induces a complex dynamic network routing mechanism, it can be implemented by a simple MLP and learned end-to-end with off-the-shelf object detectors. This "dynamic proposal processing" (DPP) is shown to outperform state-of-the-art end-to-end object detectors (DETR, Sparse R-CNN) by a clear margin for a given computational complexity.
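A minimal sketch of the routing idea, where an MLP maps each proposal embedding to a (near) one-hot code over candidate operators, is shown below; the Gumbel-softmax relaxation used to keep the selection differentiable is an assumed implementation detail, not necessarily the authors' choice.

```python
# A selector MLP mapping each proposal embedding to a one-hot code over
# candidate operators, made differentiable with the straight-through
# Gumbel-softmax. Sketch of the routing idea only, not the paper's exact head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProposalOperatorSelector(nn.Module):
    def __init__(self, embed_dim=256, num_operators=3):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, num_operators),
        )

    def forward(self, proposal_embeddings):
        # proposal_embeddings: (num_proposals, embed_dim)
        logits = self.mlp(proposal_embeddings)
        # hard=True yields a one-hot selection in the forward pass while
        # keeping gradients via the straight-through estimator
        return F.gumbel_softmax(logits, tau=1.0, hard=True)

codes = ProposalOperatorSelector()(torch.randn(100, 256))
print(codes.sum(dim=-1))   # each proposal picks exactly one operator
```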
Graph outlier detection is an emerging but crucial machine learning task with numerous applications. Despite the proliferation of algorithms in recent years, the lack of a standard and unified performance evaluation setting limits their progress and adoption in real-world applications. To bridge this gap, we present, to the best of our knowledge, the first comprehensive benchmark for unsupervised node outlier detection, called UNOD, with the following highlights: (1) evaluating fourteen methods whose backbones range from classical matrix factorization to the latest graph neural networks; (2) benchmarking the performance of the methods with different types of injected outliers and natural outliers on real-world datasets; and (3) comparing the efficiency and scalability of the algorithms by their runtime and GPU memory usage on synthetic graphs of different scales. Based on an analysis of extensive experimental results, we discuss the pros and cons of current UNOD methods and point out multiple crucial and promising future research directions.
Given the long list of anomaly detection algorithms developed over the past decades, how do they perform with regard to (i) different levels of supervision, (ii) different types of anomalies, and (iii) noisy and corrupted data? In this work, we answer these key questions by conducting, to the best of our knowledge, the most comprehensive anomaly detection benchmark, named ADBench, covering 30 algorithms on 55 benchmark datasets. Our extensive experiments (93,654 in total) yield meaningful insights into the roles of supervision and anomaly types, and unlock future directions for researchers in algorithm selection and design. With ADBench, researchers can easily conduct comprehensive and fair evaluations of newly proposed methods against existing baselines on the datasets, including our contributed ones from the natural language and computer vision domains. To foster accessibility and reproducibility, we fully open-source ADBench and the corresponding results.
We present GLIPv2, a grounded VL understanding model that serves both localization tasks (e.g., object detection, instance segmentation) and vision-language (VL) understanding tasks (e.g., VQA, image captioning). GLIPv2 elegantly unifies localization pre-training and vision-language pre-training (VLP) with three pre-training tasks: phrase grounding as a VL reformulation of the detection task, region-word contrastive learning as a novel region-word-level contrastive learning task, and masked language modeling. This unification not only simplifies the previous multi-stage VLP procedure but also enables mutual benefits between localization and understanding tasks. Experimental results show that a single GLIPv2 model (with all model weights shared) achieves near-SOTA performance on various localization and understanding tasks. The model also shows (1) strong zero-shot and few-shot adaptation performance on open-vocabulary object detection tasks, and (2) superior grounding capability on VL understanding tasks. Code will be released at https://github.com/microsoft/glip.
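A generic region-word contrastive objective in the spirit of the second pre-training task might look like the sketch below; the shapes, the absence of projection heads, and the choice of negatives are illustrative assumptions rather than GLIPv2's exact loss.

```python
# A generic region-word contrastive loss: region features and word features
# are normalized into a shared space and matched with cross-entropy over
# similarity logits. Illustrative sketch, not GLIPv2's exact formulation.
import torch
import torch.nn.functional as F

def region_word_contrastive_loss(region_feats, word_feats, targets, temperature=0.07):
    """
    region_feats: (R, D) features of proposed regions
    word_feats:   (W, D) features of word tokens from the paired text
    targets:      (R,) index of the matching word for each region
    """
    region_feats = F.normalize(region_feats, dim=-1)
    word_feats = F.normalize(word_feats, dim=-1)
    logits = region_feats @ word_feats.t() / temperature   # (R, W) similarities
    return F.cross_entropy(logits, targets)

loss = region_word_contrastive_loss(torch.randn(8, 256), torch.randn(20, 256),
                                    torch.randint(0, 20, (8,)))
```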